Globally Guided Progressive Fusion Network for 3D Pancreas Segmentation
Recently, 3D volumetric organ segmentation has attracted much research interest in
medical image analysis due to its significance in computer-aided diagnosis.
This paper aims to address the pancreas segmentation task in 3D computed
tomography volumes. We propose a novel end-to-end network, Globally Guided
Progressive Fusion Network, as an effective and efficient solution to
volumetric segmentation, which involves both global features and complicated 3D
geometric information. A progressive fusion network is devised to extract 3D
information from a moderate number of neighboring slices and predict a
probability map for the segmentation of each slice. An independent branch for
excavating global features from downsampled slices is further integrated into
the network. Extensive experimental results demonstrate that our method
achieves state-of-the-art performance on two pancreas datasets.
Comment: MICCAI 201
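The two ingredients described above — per-slice prediction from a small stack of neighbouring slices, and a global branch fed with downsampled slices — can be sketched as plain array operations. This is a toy illustration of the data flow only, not the authors' network; the block-average downsampling, the `fuse_predictions` helper, and its `alpha` weight are all assumptions standing in for learned components.

```python
import numpy as np

def slice_windows(volume, k=2):
    """Gather, for every slice, a window of its 2k+1 neighbouring slices
    (reflect-padded at the volume borders) as input for local prediction."""
    padded = np.pad(volume, ((k, k), (0, 0), (0, 0)), mode="reflect")
    return np.stack([padded[i:i + 2 * k + 1] for i in range(volume.shape[0])])

def downsample(slice2d, factor=4):
    """Downsample a slice by block averaging -- the input to the global branch."""
    h, w = slice2d.shape
    return slice2d[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def fuse_predictions(local_prob, global_prob, alpha=0.5):
    """Hypothetical fusion of the local probability map with the (upsampled)
    global-branch map; the real network fuses learned features instead."""
    return alpha * local_prob + (1.0 - alpha) * global_prob
```

The window size `k` controls the "moderate number of neighboring slices" the local branch sees; reflect padding keeps border slices usable.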
Cutting out the middleman: measuring nuclear area in histopathology slides without segmentation
The size of nuclei in histological preparations from excised breast tumors is
predictive of patient outcome (large nuclei indicate poor outcome).
Pathologists take into account nuclear size when performing breast cancer
grading. In addition, the mean nuclear area (MNA) has been shown to have
independent prognostic value. The straightforward approach to measuring nuclear
size is by performing nuclei segmentation. We hypothesize that given an image
of a tumor region with known nuclei locations, the area of the individual
nuclei and region statistics such as the MNA can be reliably computed directly
from the image data by employing a machine learning model, without the
intermediate step of nuclei segmentation. Towards this goal, we train a deep
convolutional neural network model that is applied locally at each nucleus
location, and can reliably measure the area of the individual nuclei and the
MNA. Furthermore, we show how such an approach can be extended to perform
combined nuclei detection and measurement, which is reminiscent of
granulometry.
Comment: Conditionally accepted for MICCAI 201
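At inference time, the measure-without-segmentation idea reduces to cropping a patch at each known nucleus location, regressing an area from it, and averaging the results into the MNA. A minimal sketch, with a stand-in `area_model` callable in place of the trained CNN regressor (the patch size is an assumption):

```python
import numpy as np

def extract_patch(image, center, size=32):
    """Crop a size x size patch around a nucleus centre (edge-padded so
    nuclei near the image border are still usable)."""
    half = size // 2
    padded = np.pad(image, half, mode="edge")
    y, x = center
    return padded[y:y + size, x:x + size]

def mean_nuclear_area(image, centers, area_model):
    """MNA as the mean of per-nucleus area predictions; `area_model` stands
    in for the trained deep regression model applied locally at each
    nucleus location."""
    return float(np.mean([area_model(extract_patch(image, c)) for c in centers]))
```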
Registration of prone and supine CT colonography images and its clinical application
Computed tomographic (CT) colonography is a technique for detecting bowel cancer and potentially precancerous polyps. CT imaging is performed on the cleansed and insufflated bowel in order to produce a virtual endoluminal representation similar to optical colonoscopy. Because fluids and stool can mimic pathology, images are acquired with the patient in both prone and supine positions. Radiologists then match endoluminal locations visually between the two acquisitions in order to determine whether pathology is real or not. This process is hindered by the fact that the colon can undergo considerable deformation between acquisitions. Robust and accurate automated registration between prone and supine data acquisitions is therefore pivotal for medical interpretation, but it is a challenging problem. The method proposed in this thesis reduces the complexity of the registration task of aligning the prone and supine CT colonography acquisitions. This is done by utilising cylindrical representations of the colonic surface which reflect the colon's specific anatomy. Automated alignment in the cylindrical domain is achieved by non-rigid image registration using surface curvatures, applicable even when cases exhibit local luminal collapses. It is furthermore shown that landmark matches for initialisation improve the registration's accuracy and robustness. Additional performance improvements are achieved by symmetric and inverse-consistent registration and by iteratively deforming the surface in order to compensate for differences in distension and bowel preparation. Manually identified reference points in human data and fiducial markers in a porcine phantom are used to validate the registration accuracy. The potential clinical impact of the method has been evaluated using data that reflects clinical practice. Furthermore, correspondence between follow-up CT colonography acquisitions is established in order to facilitate the clinical need to investigate polyp growth over time.
Accurate registration has the potential to both improve the diagnostic process and decrease the radiologist's interpretation time. Furthermore, its result could be integrated into algorithms for improved computer-aided detection of colonic polyps.
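One criterion mentioned above, inverse consistency, is easy to make concrete: composing the forward and backward mappings should return each point to itself. A minimal sketch of the error an inverse-consistent registration drives towards zero (the function signatures are illustrative, not the thesis's implementation):

```python
import numpy as np

def inverse_consistency_error(fwd, bwd, points):
    """Mean distance between each point and bwd(fwd(point)).
    A symmetric, inverse-consistent registration penalises this quantity
    so that the forward and backward deformations agree."""
    mapped = bwd(fwd(points))
    return float(np.linalg.norm(mapped - points, axis=1).mean())
```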
Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise Transformation for 3D Medical Image Segmentation
Deep learning highly relies on the quantity of annotated data. However, the
annotations for 3D volumetric medical data require experienced physicians to
spend hours or even days on investigation. Self-supervised learning is a
potential solution for alleviating the strong requirement for annotated training
data by deeply exploiting the information in raw data. In this paper, we propose a novel
self-supervised learning framework for volumetric medical images. Specifically,
we propose a context restoration task, i.e., Rubik's cube++, to pre-train 3D
neural networks. Different from the existing context-restoration-based
approaches, we adopt a volume-wise transformation for context permutation,
which encourages the network to better exploit the inherent 3D anatomical
information of organs. Compared to the strategy of training from scratch,
fine-tuning from the Rubik's cube++ pre-trained weight can achieve better
performance in various tasks such as pancreas segmentation and brain tissue
segmentation. The experimental results show that our self-supervised learning
method can significantly improve the accuracy of 3D deep learning networks on
volumetric medical datasets without the use of extra data.
Comment: Accepted by MICCAI 202
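The abstract contrasts volume-wise transformations with cube-by-cube shuffling. A guess at the spirit of that idea, for illustration only: transform the whole volume at once (here, a 90-degree rotation about an anatomical axis, like turning a Rubik's cube), and make restoring the original orientation the pretext task. This is not necessarily the paper's exact operation.

```python
import numpy as np

def rotate_volume(volume, axis, k):
    """Rotate the whole volume by k*90 degrees about one axis -- a
    volume-wise transformation that, unlike shuffling sub-cubes,
    transforms the 3D anatomy as a whole."""
    axes = [(1, 2), (0, 2), (0, 1)][axis]
    return np.rot90(volume, k=k, axes=axes)

def restore(volume, axis, k):
    """The context-restoration target: undo the applied transformation."""
    return rotate_volume(volume, axis, -k)
```

A network pre-trained to predict (or undo) the applied rotation must learn the organs' 3D anatomical context, which is what makes the weights useful for downstream segmentation fine-tuning.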
3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation
This paper presents a fully automated atlas-based pancreas segmentation
method from CT volumes utilizing 3D fully convolutional network (FCN)
feature-based pancreas localization. Segmentation of the pancreas is difficult
because it has larger inter-patient spatial variations than other organs.
Previous pancreas segmentation methods failed to deal with such variations. We
propose a fully automated pancreas segmentation method comprising novel
localization and segmentation steps. Since the pancreas neighbors many other organs,
its position and size are strongly related to the positions of the surrounding
organs. We estimate the position and size of the pancreas (localization) from
global features by regression forests. As global features, we use intensity
differences and 3D FCN deep learned features, which include automatically
extracted essential features for segmentation. We chose 3D FCN features from a
trained 3D U-Net, which is trained to perform multi-organ segmentation. The
global features include both the pancreas and surrounding organ information.
After localization, a patient-specific probabilistic atlas-based pancreas
segmentation is performed. In evaluation results with 146 CT volumes, we
achieved a Jaccard index of 60.6% and a Dice overlap of 73.9%.
Comment: Presented at the MICCAI 2017 workshop DLMIA 2017 (Deep Learning in
Medical Image Analysis and Multimodal Learning for Clinical Decision Support)
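The localization stage regresses the pancreas's position and size from global features. The 6-D regression target that a forest would be trained to predict can be encoded from a training mask as below; the forest itself and the 3D FCN features it consumes are omitted, and the (centre, size) encoding is an assumption consistent with the abstract's wording.

```python
import numpy as np

def bbox_target(mask):
    """Encode an organ mask as a 6-D vector (centre_z, centre_y, centre_x,
    size_z, size_y, size_x) -- the regression target for localizing the
    pancreas from global features."""
    idx = np.argwhere(mask)
    lo = idx.min(axis=0).astype(float)
    hi = idx.max(axis=0).astype(float) + 1.0
    return np.concatenate([(lo + hi) / 2.0, hi - lo])
```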
Registration of the endoluminal surfaces of the colon derived from prone and supine CT colonography
Purpose: Computed tomographic (CT) colonography is a relatively new technique for detecting bowel cancer or potentially precancerous polyps. CT scanning is combined with three-dimensional (3D) image reconstruction to produce a virtual endoluminal representation similar to optical colonoscopy. Because retained fluid and stool can mimic pathology, CT data are acquired with the bowel cleansed and insufflated with gas and the patient in both prone and supine positions. Radiologists then visually match endoluminal locations between the two acquisitions in order to determine whether apparent pathology is real or not. This process is hindered by the fact that the colon, essentially a long tube, can undergo considerable deformation between acquisitions. The authors present a novel approach to automatically establish spatial correspondence between prone and supine endoluminal colonic surfaces after surface parameterization, even in the case of local colon collapse.
Methods: The complexity of the registration task was reduced from a 3D to a 2D problem by mapping the surfaces extracted from prone and supine CT colonography onto a cylindrical parameterization. A nonrigid cylindrical registration was then performed to align the full colonic surfaces. The curvature information from the original 3D surfaces was used to determine correspondence. The method can also be applied to cases with regions of local colonic collapse by ignoring the collapsed regions during the registration.
Results: Using a development set, suitable parameters were found to constrain the cylindrical registration method. The same registration parameters were then applied to a different set of 13 validation cases, consisting of 8 fully distended cases and 5 cases exhibiting multiple colonic collapses. All polyps present were well aligned, with a mean (+/- std. dev.) registration error of 5.7 (+/- 3.4) mm. An additional set of 1175 reference points on haustral folds spread over the full endoluminal colon surfaces resulted in an error of 7.7 (+/- 7.4) mm. Here, 82% of folds were aligned correctly after registration, with a further 15% misregistered by just one fold.
Conclusions: The proposed method reduces the 3D registration task to a cylindrical registration representing the endoluminal surface of the colon. Our algorithm uses surface curvature information as a similarity measure to drive the registration and to compensate for the large colorectal deformations that occur between prone and supine data acquisitions. The method has the potential to both enhance polyp detection and decrease the radiologist's interpretation time. (C) 2011 American Association of Physicists in Medicine. [DOI: 10.1118/1.3577603]
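The key dimensionality reduction is the cylindrical parameterization: each endoluminal surface point becomes a (longitudinal, angular) pair, and the nonrigid registration then runs in that 2-D domain. A toy version is shown below, with the simplifying assumption that the local centreline is the z-axis; the actual method parameterizes along the extracted, curved colonic centreline.

```python
import numpy as np

def unroll_to_cylinder(points):
    """Map 3D surface points (N x 3 array) to 2-D cylindrical coordinates
    (z, theta), taking the colon's local centreline as the z-axis.
    Registration is then performed on this unrolled 2-D representation."""
    x, y, z = points.T
    return np.stack([z, np.arctan2(y, x)], axis=1)
```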
EP-Net: Learning Cardiac Electrophysiology Models for Physiology-based Constraints in Data-Driven Predictions
Cardiac electrophysiology (EP) models have made good progress in simulating cardiac electrical activity. However, numerical issues and computational times hamper the clinical applicability of such models. Moreover, personalisation can still be challenging and model errors can be difficult to overcome. On the other hand, deep learning methods have achieved impressive results but suffer from robustness issues in healthcare due to their lack of physiological knowledge. We propose a novel approach based on deep learning to replace the numerical integration of partial differential equations. This has the advantage of directly learning spatio-temporal correlations, which increases stability. Moreover, once trained, solutions are very fast to compute. We present first results in state estimation based on few measurements and evaluate the forecasting power of the trained network. The proposed method performed very well in this preliminary evaluation. It opens up possibilities towards data-driven personalisation and towards overcoming model error by learning from the data.
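What the network replaces is the numerical time-stepping of the EP model's partial differential equation. As a point of reference, one explicit finite-difference step of a toy 1-D diffusion term is shown below; a network like EP-Net would learn the mapping from a state to the next state from simulated pairs like these, instead of integrating at run time. The diffusion-only PDE here is a stand-in, not the full EP model.

```python
import numpy as np

def fd_step(u, dt=0.1, dx=1.0, d_coef=1.0):
    """One explicit finite-difference step of du/dt = D * d2u/dx2 with
    periodic boundaries. Pairs (u, fd_step(u)) generated by such an
    integrator would supervise a network that emulates it."""
    lap = (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2
    return u + dt * d_coef * lap
```

Once trained, evaluating the network is a single forward pass per step, which is why the learned surrogate is much faster than repeated numerical integration.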
Weakly supervised segmentation from extreme points
Annotation of medical images has been a major bottleneck for the development
of accurate and robust machine learning models. Annotation is costly and
time-consuming and typically requires expert knowledge, especially in the
medical domain. Here, we propose to use minimal user interaction in the form of
extreme point clicks in order to train a segmentation model that can, in turn,
be used to speed up the annotation of medical images. We use extreme points in
each dimension of a 3D medical image to constrain an initial segmentation based
on the random walker algorithm. This segmentation is then used as a weak
supervisory signal to train a fully convolutional network that can segment the
organ of interest based on the provided user clicks. We show that the network's
predictions can be refined through several iterations of training and
prediction using the same weakly annotated data. Ultimately, our method has the
potential to speed up the generation process of new training datasets for the
development of new machine learning and deep learning-based models for, but not
exclusively, medical image analysis.
Comment: Accepted at the MICCAI Workshop for Large-scale Annotation of
Biomedical Data and Expert Label Synthesis, Shenzhen, China, 201
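The user input and the seeding of the random walker can be made concrete. The helpers below derive the extreme-point clicks from a (here, already known) mask and turn them into a seed map: foreground at the clicks, background outside a padded bounding box of the clicks. The background rule and the `pad` margin are assumptions, and the random walker itself (e.g. `skimage.segmentation.random_walker`) is omitted.

```python
import numpy as np

def extreme_points(mask):
    """The 2*ndim clicks a user would provide: the organ's extreme voxel
    along each axis (minimum and maximum)."""
    idx = np.argwhere(mask)
    points = []
    for ax in range(mask.ndim):
        points.append(tuple(idx[idx[:, ax].argmin()]))
        points.append(tuple(idx[idx[:, ax].argmax()]))
    return points

def seeds_from_clicks(shape, points, pad=2):
    """Random-walker seed map: 1 = foreground (the clicks), 2 = background
    (everything outside the padded click bounding box), 0 = unlabelled
    voxels the walker must decide."""
    seeds = np.full(shape, 2, dtype=np.int8)
    pts = np.array(points)
    lo = np.maximum(pts.min(axis=0) - pad, 0)
    hi = np.minimum(pts.max(axis=0) + pad + 1, shape)
    seeds[tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))] = 0
    for p in points:
        seeds[p] = 1
    return seeds
```

The random walker's output then serves as the weak label for training the FCN, and the train/predict cycle is repeated on the same clicks to refine it.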
Pelvis segmentation using multi-pass U-Net and iterative shape estimation
In this report, an automatic method for segmentation of the pelvis in three-dimensional (3D) computed tomography (CT) images is proposed. The method is based on a 3D U-Net which takes as input the 3D CT image and estimated volumetric shape models of the targeted structures, and which returns the probability maps of each structure. During training, the 3D U-Net is initially trained using blank shape context inputs to generate the segmentation masks, i.e., relying only on the image channel of the input. The preliminary segmentation results are used to estimate a new shape model, which is then fed to the same network again together with the input images. With the additional shape context information, the U-Net is trained again to generate better segmentation results. During the testing phase, the input image is fed through the same 3D U-Net multiple times, first with blank shape context channels and then with iteratively re-estimated shape models. Preliminary results show that the proposed multi-pass U-Net with iterative shape estimation outperforms both 2D and 3D conventional U-Nets without the shape model.
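The test-time procedure is an iterative loop: a first pass with blank shape-context channels, then re-feeding each pass's output as the next shape estimate. A sketch with a stand-in `model(image, shape_ctx)` callable, and with the probability map itself reused directly as the shape estimate — a simplification, since the actual method fits a volumetric shape model between passes.

```python
import numpy as np

def multi_pass_segment(image, model, n_passes=3):
    """Run the same network repeatedly: start from a blank shape-context
    channel, then feed each pass's prediction back in as the next
    shape estimate, returning the final probability map."""
    shape_ctx = np.zeros_like(image)
    prob = None
    for _ in range(n_passes):
        prob = model(image, shape_ctx)
        shape_ctx = prob
    return prob
```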